44 research outputs found

    Compact exact Lagrangian intersections in cotangent bundles via sheaf quantization

    We show that the cardinality of the transverse intersection of two compact exact Lagrangian submanifolds in a cotangent bundle is bounded from below by the dimension of the Hom space of sheaf quantizations of the Lagrangians in Tamarkin's category. Our sheaf-theoretic method can also deal with clean and degenerate Lagrangian intersections. Comment: 36 pages, 4 figures, with an appendix by Tomohiro Asano.
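    In the notation suggested by the abstract (our paraphrase, not the paper's statement: F_{L_i} denotes a sheaf quantization of L_i and T(M) Tamarkin's category), the intersection bound reads:

    \[
    \#\,(L_0 \pitchfork L_1) \;\geq\; \dim \operatorname{Hom}_{\mathcal{T}(M)}\bigl(F_{L_0}, F_{L_1}\bigr).
    \]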

    Microlocal Lefschetz classes of graph trace kernels

    In this paper, we define the notion of graph trace kernels as a generalization of trace kernels. We associate a microlocal Lefschetz class with a graph trace kernel and prove that this class is functorial with respect to the composition of kernels. We apply graph trace kernels to the microlocal Lefschetz fixed point formula for constructible sheaves. Comment: 18 pages, revised, to appear in Publ. RIMS.

    A note on Hamiltonian stability for sheaves

    We show a strong Hamiltonian stability result for a simpler and larger distance on the Tamarkin category. We also give a stability result with support conditions. Comment: 11 pages.

    Adaptive Topological Feature via Persistent Homology: Filtration Learning for Point Clouds

    Machine learning for point clouds has been attracting much attention, with many applications in various fields, such as shape recognition and material science. To enhance the accuracy of such machine learning methods, it is known to be effective to incorporate global topological features, which are typically extracted by persistent homology. In the calculation of persistent homology for a point cloud, we need to choose a filtration, an increasing sequence of spaces. Because the performance of machine learning methods combined with persistent homology is highly affected by the choice of a filtration, we need to tune it depending on data and tasks. In this paper, we propose a framework that learns a filtration adaptively with the use of neural networks. In order to make the resulting persistent homology isometry-invariant, we develop a neural network architecture with such invariance. Additionally, we theoretically show a finite-dimensional approximation result that justifies our architecture. Experimental results demonstrate the efficacy of our framework in several classification tasks. Comment: 17 pages with 4 figures.
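    As a concrete illustration of the filtration the abstract refers to (a minimal sketch, not the paper's learned filtration), the following computes the 0-dimensional persistence diagram of a point cloud under the standard Vietoris-Rips filtration, where edges enter at their length and a connected component "dies" when it merges into an older one:

    ```python
    import math
    from itertools import combinations

    def zero_dim_persistence(points):
        """0-dimensional persistent homology of a Vietoris-Rips filtration.

        Every point is born at time 0; a component dies at the length of the
        edge that first merges it with another component (union-find).
        """
        n = len(points)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        # Edges enter the filtration at their length: this realizes the
        # "increasing sequence of spaces" mentioned in the abstract.
        edges = sorted(
            (math.dist(points[i], points[j]), i, j)
            for i, j in combinations(range(n), 2)
        )

        deaths = []
        for length, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                deaths.append(length)  # one component dies here

        # (birth, death) pairs; one component survives forever (infinite bar)
        # and is omitted from the returned diagram.
        return [(0.0, d) for d in deaths]
    ```

    For example, three nearby points plus one distant outlier yield two short bars and one long bar whose death time records the outlier's distance to the cluster; it is exactly this sensitivity to the filtration that the paper's framework tunes by learning the filtration from data.
    
    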

    PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures

    Persistence diagrams, the most common descriptors of Topological Data Analysis, encode topological properties of data and have already proved pivotal in many different applications of data science. However, since the (metric) space of persistence diagrams is not a Hilbert space, they end up being difficult inputs for most Machine Learning techniques. To address this concern, several vectorization methods have been put forward that embed persistence diagrams into either finite-dimensional Euclidean space or (implicit) infinite-dimensional Hilbert space with kernels. In this work, we focus on persistence diagrams built on top of graphs. Relying on extended persistence theory and the so-called heat kernel signature, we show how graphs can be encoded by (extended) persistence diagrams in a provably stable way. We then propose a general and versatile framework for learning vectorizations of persistence diagrams, which encompasses most of the vectorization techniques used in the literature. We finally showcase the experimental strength of our setup by achieving competitive scores on classification tasks on real-life graph datasets.
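    The vectorizations the abstract describes are permutation-invariant maps from a set of diagram points to a fixed-length vector. A minimal sketch of one such map, with fixed Gaussian centers standing in for learned parameters (`centers` and `sigma` are illustrative choices, not values from the paper):

    ```python
    import math

    def vectorize_diagram(diagram, centers, sigma=0.5):
        """Sum-pooled vectorization of a persistence diagram.

        Each point (b, d) is passed through a Gaussian bump at each center
        and weighted by its persistence d - b; summing over points makes the
        result invariant to the ordering of the diagram, as required for a
        set of (birth, death) pairs.
        """
        vec = [0.0] * len(centers)
        for b, d in diagram:
            weight = d - b  # long bars contribute more
            for k, (cb, cd) in enumerate(centers):
                bump = math.exp(
                    -((b - cb) ** 2 + (d - cd) ** 2) / (2 * sigma ** 2)
                )
                vec[k] += weight * bump
        return vec
    ```

    In the learned setting the abstract points to, the centers, the weight function, and the pooling operation would all be trainable, with this fixed-parameter version as the degenerate case.
    
    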